    A reaction diffusion-like formalism for plastic neural networks reveals dissipative solitons at criticality

    Self-organized structures in networks with spike-timing dependent plasticity (STDP) are likely to play a central role in information processing in the brain. In the present study we derive a reaction-diffusion-like formalism for plastic feed-forward networks of nonlinear rate neurons with a correlation-sensitive learning rule that is inspired by, and qualitatively similar to, STDP. After obtaining equations that describe the change of the spatial shape of the signal from layer to layer, we derive a criterion for the nonlinearity necessary to obtain stable dynamics for arbitrary input. We classify the possible scenarios of signal evolution and find that, close to the transition to the unstable regime, meta-stable solutions appear. The form of these dissipative solitons is determined analytically, and the evolution and interaction of several such coexisting objects are investigated.
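
    A rough numerical sketch of this picture (not the paper's exact equations; kernel width, gain, and input shape are made-up values): convolution with a Gaussian connectivity kernel plays the role of the diffusion term, and a saturating rate nonlinearity plays the role of the reaction term as the signal passes from layer to layer.

        import numpy as np

        # Illustrative sketch: a spatial activity profile propagating through
        # layers of a feed-forward rate network.  All parameters are arbitrary.
        x = np.linspace(-10.0, 10.0, 401)            # spatial coordinate
        dx = x[1] - x[0]
        kernel = np.exp(-x**2 / (2.0 * 1.0**2))      # Gaussian connectivity kernel
        kernel /= kernel.sum() * dx                  # normalize to unit integral

        def transfer(u, gain=1.2):
            # saturating rate nonlinearity; the gain governs stability
            return np.tanh(gain * u)

        signal = np.exp(-x**2 / (2.0 * 0.5**2))      # localized input to layer 0
        for layer in range(20):                      # propagate layer by layer
            signal = transfer(np.convolve(signal, kernel, mode="same") * dx)

        print("peak amplitude after 20 layers:", signal.max())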

    A unified view on weakly correlated recurrent networks

    The diversity of neuron models used in contemporary theoretical neuroscience to investigate specific properties of covariances raises the question of how these models relate to each other. In particular, it is hard to distinguish between generic properties and peculiarities due to the abstracted model. Here we present a unified view on pairwise covariances in recurrent networks in the irregular regime. We consider the binary neuron model, the leaky integrate-and-fire model, and the Hawkes process. We show that linear approximation maps each of these models to one of two classes of linear rate models, including the Ornstein-Uhlenbeck process as a special case. The classes differ in the location of the additive noise in the rate dynamics, which is on the output side for spiking models and on the input side for the binary model. Both classes allow closed-form solutions for the covariance. For output noise, the covariance separates into an echo term and a term due to correlated input. The unified framework enables us to transfer results between models. For example, we generalize the binary model and the Hawkes process to the presence of conduction delays and simplify derivations of established results. Our approach is applicable to general network structures and suitable for population averages. The derived averages are exact for fixed out-degree network architectures and approximate for fixed in-degree. We demonstrate how taking fluctuations into account in the linearization procedure increases the accuracy of the effective theory, and we explain the class-dependent differences between covariances in the time and the frequency domain. Finally, we show that the oscillatory instability emerging in networks of integrate-and-fire models with delayed inhibitory feedback is a model-invariant feature: the same structure of poles in the complex frequency plane determines the population power spectra.
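
    For the input-noise class, the closed-form covariance is the standard stationary solution of a linear Lyapunov equation. A minimal sketch, assuming an Ornstein-Uhlenbeck network with made-up coupling and noise matrices (not the paper's parameters):

        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov

        # Input-noise class: dr/dt = A r + xi, A = -1 + W,
        # <xi(t) xi(t')^T> = D delta(t - t'); the stationary covariance C
        # solves A C + C A^T + D = 0.  W and D are illustrative choices.
        rng = np.random.default_rng(0)
        N = 100
        W = rng.normal(0.0, 0.5 / np.sqrt(N), size=(N, N))  # weak random coupling
        A = -np.eye(N) + W                                  # effective drift matrix
        D = np.eye(N)                                       # uncorrelated input noise

        C = solve_continuous_lyapunov(A, -D)                # solves A C + C A^T = -D
        off_diag = C[~np.eye(N, dtype=bool)]
        print("mean pairwise covariance:", off_diag.mean())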

    Equilibrium and Response Properties of the Integrate-and-Fire Neuron in Discrete Time

    The integrate-and-fire neuron with exponential postsynaptic potentials is a frequently employed model to study neural networks. Simulations in discrete time still offer the highest performance at moderate numerical errors, which makes them the first choice for long-term simulations of plastic networks. Here we extend the population density approach to investigate how the equilibrium and response properties of the leaky integrate-and-fire neuron are affected by time discretization. We present a novel analytical treatment of the boundary condition at threshold, taking both the discretization of time and finite synaptic weights into account. We uncover an increased membrane potential density just below threshold as the decisive property that explains the deviations found between simulations and the classical diffusion approximation. Temporal discretization and finite synaptic weights both contribute to this effect. Our treatment improves the standard formula for calculating the neuron's equilibrium firing rate. Direct solution of the Markov process describing the evolution of the membrane potential density confirms our analysis and yields a method to calculate the firing rate exactly. Knowing the shape of the membrane potential distribution near threshold enables us to derive the transient response properties of the neuron model to synaptic input. We find a pronounced non-linear fast response component that has not been described by the prevailing continuous-time theory for Gaussian white noise input.
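
    A minimal sketch of the kind of simulation discussed here, a discrete-time leaky integrate-and-fire neuron with finite synaptic weights and Poisson input; all parameter values are illustrative, and the threshold is checked only on the time grid, which is exactly the source of the discretization effects:

        import numpy as np

        rng = np.random.default_rng(1)
        h, tau_m = 0.1, 10.0             # time step and membrane time constant (ms)
        decay = np.exp(-h / tau_m)       # exact propagator over one step
        w_exc, w_inh = 0.2, -0.3         # finite synaptic weights (mV)
        nu_exc, nu_inh = 9000.0, 2000.0  # Poisson input rates (1/s)
        theta, V_reset = 15.0, 0.0       # threshold and reset (mV)

        V, n_spikes, steps = 0.0, 0, 1_000_000
        for _ in range(steps):
            # numbers of input spikes arriving within this time step
            n_e = rng.poisson(nu_exc * h * 1e-3)
            n_i = rng.poisson(nu_inh * h * 1e-3)
            V = V * decay + n_e * w_exc + n_i * w_inh
            if V >= theta:               # crossing checked only on the grid
                n_spikes += 1
                V = V_reset
        print("equilibrium rate:", n_spikes / (steps * h * 1e-3), "spikes/s")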

    The perfect integrator driven by Poisson input and its approximation in the diffusion limit

    In this note we consider the perfect integrator driven by Poisson process input. We derive its equilibrium and response properties and contrast them with the approximations obtained by applying the diffusion approximation. In particular, the probability density in the vicinity of the threshold differs, which leads to altered response properties of the system in equilibrium.
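
    The deviation from the diffusion limit can be seen in a few lines. A sketch with illustrative parameters (the reset-to-zero convention is an assumption here): with finite jumps the integrator needs ceil(theta / w) inputs per output spike, so the exact rate is lam / ceil(theta / w), while the diffusion approximation predicts lam * w / theta.

        import numpy as np

        rng = np.random.default_rng(2)
        lam, w, theta = 1000.0, 0.3, 10.0   # input rate (1/s), jump and threshold (mV)

        T = 200.0                           # simulated time in seconds
        n_inputs = rng.poisson(lam * T)     # total number of Poisson input spikes
        V, n_out = 0.0, 0
        for _ in range(n_inputs):
            V += w                          # each input causes a finite jump
            if V >= theta:
                n_out += 1
                V = 0.0                     # reset to zero after an output spike
        print("simulated rate      :", n_out / T, "1/s")
        print("diffusion prediction:", lam * w / theta, "1/s")
        print("finite-jump rate    :", lam / np.ceil(theta / w), "1/s")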

    Structural Plasticity Controlled by Calcium Based Correlation Detection

    Hebbian learning in cortical networks during development and adulthood relies on the presence of a mechanism to detect correlation between presynaptic and postsynaptic spiking activity. Recently, the calcium concentration in spines was experimentally shown to be a correlation-sensitive signal with the necessary properties: it is confined to the spine volume, it depends on the relative timing of pre- and postsynaptic action potentials, and it is independent of the spine's location along the dendrite. NMDA receptors are a candidate mediator for the correlation-dependent calcium signal. Here, we present a quantitative model of correlation detection in synapses based on the calcium influx through NMDA receptors under realistic conditions of irregular pre- and postsynaptic spiking activity with pairwise correlation. Our analytical framework captures the interaction of the learning rule and the correlation dynamics of the neurons. We find that a simple thresholding mechanism can act as a sensitive and reliable correlation detector at physiological firing rates. Furthermore, the mechanism is sensitive to correlation among afferent synapses through cooperation and competition. In our model, this mechanism controls synapse formation and elimination. We explain how synapse elimination leads to firing-rate homeostasis and show that the connectivity structure is shaped by the correlations between neighboring inputs.
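
    A toy version of the thresholding idea (not the paper's calibrated model; all amplitudes, time constants, and the threshold are made-up values): pre- and postsynaptic Poisson trains share a common source, which produces pairwise correlation, and a calcium-like trace receives a large, NMDA-like increment only when pre and post spike together.

        import numpy as np

        rng = np.random.default_rng(3)
        h, steps = 1.0, 200_000            # 1 ms bins, 200 s in total
        rate, c = 0.005, 0.2               # 5 Hz firing rate, correlation coefficient

        shared = rng.random(steps) < c * rate * h
        pre = shared | (rng.random(steps) < (1.0 - c) * rate * h)
        post = shared | (rng.random(steps) < (1.0 - c) * rate * h)

        tau_ca, amp_pair, amp_single = 50.0, 1.0, 0.1
        decay = np.exp(-h / tau_ca)
        ca, above = 0.0, 0
        for t in range(steps):
            ca *= decay
            if pre[t] and post[t]:
                ca += amp_pair             # coincidence: supralinear contribution
            elif pre[t] or post[t]:
                ca += amp_single           # isolated spikes contribute little
            above += ca > 0.5              # threshold acts as the detector
        print("fraction of time above threshold:", above / steps)

    For uncorrelated trains (c = 0), coincidences are far rarer and the trace almost never exceeds the threshold, which is what makes the simple threshold a usable correlation detector.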

    Surrogate Spike Train Generation Through Dithering in Operational Time

    Detecting an excess of spike synchrony and testing its significance cannot be done analytically for many types of spike trains and therefore relies on adequate surrogate methods. The main challenge for these methods is to conserve certain features of the spike trains, the two most important being the firing rate and the inter-spike interval statistics. In this study we make use of operational time to introduce generalizations of spike dithering and propose two novel surrogate methods which conserve both features with high accuracy. Compared to earlier approaches, the methods show improved robustness in detecting excess synchrony between spike trains.
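
    One way to realize dithering in operational time (a sketch of the general idea, not necessarily the paper's exact algorithm; the rate profile, spike data, and dither width are made up): map spike times through the cumulative rate Lambda(t), dither uniformly in the rescaled time where the rate is constant, and map back.

        import numpy as np

        rng = np.random.default_rng(4)
        t = np.linspace(0.0, 1000.0, 10_001)                        # ms
        rate = 0.01 * (1.0 + 0.8 * np.sin(2.0 * np.pi * t / 200.0)) # 1/ms
        Lam = np.cumsum(rate) * (t[1] - t[0])                       # cumulative rate

        spikes = np.sort(rng.uniform(0.0, 1000.0, size=30))         # stand-in data

        op = np.interp(spikes, t, Lam)                # to operational time
        op += rng.uniform(-0.5, 0.5, size=op.size)    # uniform dither there
        op = np.clip(op, Lam[0], Lam[-1])             # stay inside the recording
        surrogate = np.sort(np.interp(op, Lam, t))    # back to experimental time
        print(np.round(surrogate[:5], 1))

    Because the dither is applied where the rate is constant, a fixed dither width corresponds to a small displacement where the rate is high and a large one where it is low, so the surrogate conserves the rate profile.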

    PyNEST: A Convenient Interface to the NEST Simulator

    The neural simulation tool NEST (http://www.nest-initiative.org) is a simulator for heterogeneous networks of point neurons or neurons with a small number of compartments. It aims at simulations of large neural systems with more than 10^4 neurons and 10^7 to 10^9 synapses. NEST is implemented in C++ and can be used on a wide range of architectures, from single-core laptops and multi-core desktop computers to supercomputers with thousands of processor cores. Python (http://www.python.org) is a modern programming language that has recently received considerable attention in computational neuroscience. Python is easy to learn and has many extension modules for scientific computing (e.g. http://www.scipy.org). In this contribution we describe PyNEST, the new user interface to NEST. PyNEST combines NEST's efficient simulation kernel with the simplicity and flexibility of Python. Compared to NEST's native simulation language SLI, PyNEST makes it easier to set up simulations, generate stimuli, and analyze simulation results. We describe how PyNEST connects NEST and Python and how it is implemented. With a number of examples, we illustrate how it is used.
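
    A minimal PyNEST session looks like the following. Names follow the current NEST 3 interface (older releases used, e.g., "spike_detector" and nest.GetStatus instead of the .get() accessor), and the input rate and weight are arbitrary example values:

        import nest  # requires a NEST installation with the Python bindings

        nest.ResetKernel()

        neuron = nest.Create("iaf_psc_alpha")                   # point neuron
        noise = nest.Create("poisson_generator",
                            params={"rate": 80000.0})           # spikes/s
        recorder = nest.Create("spike_recorder")

        nest.Connect(noise, neuron, syn_spec={"weight": 1.0})   # stimulate
        nest.Connect(neuron, recorder)                          # record spikes

        nest.Simulate(1000.0)                                   # simulate 1 s
        print("recorded spikes:", recorder.get("n_events"))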

    Criteria on Balance, Stability, and Excitability in Cortical Networks for Constraining Computational Models

    During ongoing and Up-state activity, cortical circuits manifest a set of dynamical features that are conserved across these states. The present work systematizes these phenomena in terms of three notions: excitability, the ability to sustain activity without external input; balance, the precise coordination of excitatory and inhibitory neuronal inputs; and stability, the maintenance of activity at a steady level. Slice preparations exhibiting Up states demonstrate that balanced activity can be maintained by small local circuits. While computational models of cortical circuits have included different combinations of excitability, balance, and stability, they have done so without a systematic quantitative comparison with experimental data. Our study provides quantitative criteria for this purpose by analyzing in-vitro and in-vivo neuronal activity and characterizing the dynamics at the neuronal and population levels. The criteria are defined with a tolerance that allows for differences between experiments, yet are sufficient to capture commonalities between persistently depolarized cortical network states and to help validate computational models of cortex. As test cases for the derived set of criteria, we analyze three widely used models of cortical circuits and find that each model possesses some of the experimentally observed features, but none satisfies all criteria simultaneously, showing that the criteria are able to identify weak spots in computational models. The criteria described here form a starting point for the systematic validation of cortical neuronal network models, which will help improve the reliability of future models and render them better building blocks for larger models of the brain.
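
    A hypothetical example of what one such criterion might look like in code (the abstract does not state the actual measures or tolerances, so the statistic, the 0.5 tolerance, and the stand-in data below are all assumptions): quantify "stability" as the coefficient of variation of the population rate in sliding windows.

        import numpy as np

        def population_rate_cv(spike_times, n_neurons, t_max, window=50.0):
            # population rate in windows of `window` ms, then its CV
            bins = np.arange(0.0, t_max + window, window)
            counts, _ = np.histogram(spike_times, bins)
            rate = counts / (n_neurons * window * 1e-3)   # spikes/s per neuron
            return rate.std() / rate.mean()

        rng = np.random.default_rng(5)
        spikes = rng.uniform(0.0, 10_000.0, size=50_000)  # stand-in spike data
        cv = population_rate_cv(spikes, n_neurons=100, t_max=10_000.0)
        print("stability criterion passed:", cv < 0.5)    # made-up tolerance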

    Decorrelation of neural-network activity by inhibitory feedback

    Correlations in spike-train ensembles can seriously impair the encoding of information in their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent theoretical and experimental studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. By means of a linear network model and simulations of networks of leaky integrate-and-fire neurons, we show that shared-input correlations are efficiently suppressed by inhibitory feedback. To elucidate the effect of feedback, we compare the responses of the intact recurrent network with those of systems in which the statistics of the feedback channel are perturbed. The suppression of spike-train correlations and population-rate fluctuations by inhibitory feedback can be observed both in purely inhibitory and in excitatory-inhibitory networks. The effect is fully captured by a linear theory and is already apparent at the macroscopic level of the population-averaged activity. At the microscopic level, shared-input correlations are suppressed by spike-train correlations: in purely inhibitory networks, they are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive; here, the suppression of input correlations is not a result of the mere existence of correlations between excitatory (E) and inhibitory (I) neurons, but a consequence of the particular structure of correlations among the three possible pairings (EE, EI, II).
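
    The macroscopic version of the mechanism fits in a scalar linear rate model with negative self-feedback, in the spirit of the linear theory mentioned above (gain, step size, and noise level below are illustrative, not fitted): closing the inhibitory loop suppresses the fluctuations caused by shared input noise, and opening it restores them.

        import numpy as np

        rng = np.random.default_rng(6)
        steps, g = 100_000, -4.0                 # negative (inhibitory) gain
        noise = rng.normal(0.0, 1.0, steps)      # shared input fluctuations

        def simulate(gain, h=0.1):
            # leaky rate dynamics: y <- y + h * (-y + gain * y + noise)
            y, out = 0.0, np.empty(steps)
            for i in range(steps):
                y += h * (-y + gain * y + noise[i])
                out[i] = y
            return out

        closed = simulate(g)                     # intact inhibitory feedback
        opened = simulate(0.0)                   # feedback channel removed
        print("variance ratio (closed / open):", closed.var() / opened.var())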